Forex Heatmap
█ OVERVIEW
This indicator creates a dynamic grid display of currency pair cross rates (exchange rates) and percentage changes, emulating the Cross Rates and Heat Map widgets available on our Forex page. It provides a view of realtime exchange rates for all possible pairs derived from a user-specified list of currencies, allowing users to monitor the relative performance of several currencies directly on a TradingView chart.
█ CONCEPTS
Foreign exchange
The Foreign Exchange (Forex/FX) market is the largest, most liquid financial market globally, with an average daily trading volume of over 5 trillion USD. Open 24 hours a day, five days a week, it operates through a decentralized network of financial hubs in various major cities worldwide. In this market, participants trade currencies in pairs, where the listed price of a currency pair represents the exchange rate from a given base currency to a specific quote currency. For example, the "EURUSD" pair's price represents the amount of USD (quote currency) that equals one unit of EUR (base currency). Globally, the most traded currencies include the U.S. dollar (USD), Euro (EUR), Japanese yen (JPY), British pound (GBP), and Australian dollar (AUD), with USD involved in over 87% of all trades.
Understanding the Forex market is essential for traders and investors, even those who do not trade currency pairs directly, because exchange rates profoundly affect global markets. For instance, fluctuations in the value of USD can impact the demand for U.S. exports or the earnings of companies that handle multinational transactions, either of which can affect the prices of stocks, indices, and commodities. Additionally, since many factors influence exchange rates, including economic policies and interest rate changes, analyzing the exchange rates across currencies can provide insight into global economic health.
█ FEATURES
Requesting a list of currencies
This indicator requests data for every valid currency pair combination from the list of currencies defined by the "Currency list" input in the "Settings/Inputs" tab. The list can contain up to six unique currency codes separated by commas, resulting in a maximum of 30 requested currency pairs.
For example, if the specified "Currency list" input is "CAD, USD, EUR", the indicator requests and displays relevant data for six currency pair combinations: "CADUSD", "USDCAD", "CADEUR", "EURCAD", "USDEUR", "EURUSD". See the "Grid display" section below to understand how the script organizes the requested information.
Each item in the comma-separated list must represent a valid currency code. If the "Currency list" input contains an invalid currency code, the corresponding cells for that currency in the "Cross rates" or "Heat map" grid show "NaN" values. If the list contains empty items, e.g., "CAD, ,EUR, ", the indicator ignores them in its data requests and calculations.
NOTE: Some uncommon currency pair combinations might not have data feeds available. If no available symbols provide the exchange rates between two specified currencies, the corresponding table cells show "NaN" results.
Realtime data
The indicator retrieves realtime market prices, daily price changes, and minimum tick sizes for all the currency pairs derived from the "Currency list" input. It updates the retrieved information shown in its grid display after new ticks become available to reflect the latest known values.
NOTE: Pine scripts execute on realtime bars only when new ticks are available in the chart's data feed. If no new updates are available from the chart's realtime feed, the data the indicator displays may lag behind the latest values of the requested symbols.
Grid display
This indicator displays the requested data for each currency pair in a table with cells organized as a grid. Each row name corresponds to a pair's base currency, and each column name corresponds to a quote currency. The cell at the intersection of a specific row and column shows the value requested from the corresponding currency pair.
For example, the cell at the intersection of a "EUR" row and "USD" column shows the data retrieved for the "EURUSD" currency pair, and the cell at the "USD" row and "EUR" column shows data for the inverse pair ("USDEUR").
Note that the main diagonal cells in the table, where rows and columns with the same names intersect, are blank. The exchange rate from one currency to itself is always 1, and no Forex symbols such as "EUREUR" exist.
The dropdown input at the top of the "Settings/Inputs" tab determines the type of information displayed in the table. Two options are available: "Cross rates" and "Heat map". Both modes color their cells for light and dark themes separately based on the inputs in the "Colors" section.
Cross rates
When a user selects the "Cross rates" display mode, the table's cells show the latest available exchange rate for each currency pair, emulating the behavior of the Cross Rates widget. Each cell's value represents the amount of the quote currency (column name) that equals one unit of the base currency (row name). This display allows users to compare cross rates across currency pairs, and their inverses.
The background color of each cell changes based on the most recent update to the exchange rate, allowing users to monitor the direction of short-term fluctuations as they occur. By default, the background turns green (positive cell color) when the cross rate increases from the last recorded update and red (negative cell color) when the rate decreases. The cell's color reverts to the chart's background color after no new updates are available for 200 milliseconds.
Heat map
When a user selects the "Heat map" display mode, the table's cells show the latest daily percentage change of each currency pair, emulating the behavior of the Heat Map widget.
In this mode, the background color of each cell depends on the corresponding currency pair's daily performance. Heat maps typically use colors that vary in intensity based on the calculated values. This indicator uses the following color coding by default:
• Green (Positive cell color): Percentage change > +0.1%
• No color: Percentage change between 0.0% and +0.1%
• Bright red (Negative cell color): Percentage change < -0.1%
• Lighter/darker red (Minor negative cell color): Percentage change between 0.0% and -0.1%
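A minimal, hedged sketch of how such thresholds can map to colors in Pine Script follows; the helper name and the use of the chart symbol's daily change are illustrative assumptions, not the indicator's actual code.

```pine
//@version=5
indicator("Heat map color sketch", overlay = true)

// Hypothetical helper mapping a daily % change to the default colors listed above.
heatColor(float chgPct) =>
    switch
        chgPct > 0.1  => color.new(color.green, 0)  // positive
        chgPct >= 0.0 => color(na)                  // no color
        chgPct < -0.1 => color.new(color.red, 0)    // bright red
        =>               color.new(color.red, 60)   // minor negative: lighter red

// Usage example: tint the background with the chart symbol's daily percentage change.
float dailyChangePct = request.security(syminfo.tickerid, "1D", (close - open) / open * 100)
bgcolor(heatColor(dailyChangePct))
```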
█ FOR Pine Script™ CODERS
• This script utilizes dynamic requests to iteratively fetch information from multiple contexts using a single request.security() instance in the code. Previously, `request.*()` functions were not allowed within the local scopes of loops or conditional structures, and most `request.*()` function parameters, excluding `expression`, required arguments of a simple or weaker qualified type. The new `dynamic_requests` parameter in script declaration statements enables more flexibility in how scripts can use `request.*()` calls. When its value is `true`, all `request.*()` functions can accept series arguments for the parameters that define their requested contexts, and `request.*()` functions can execute within local scopes. See the Dynamic requests section of the Pine Script™ User Manual to learn more.
• Scripts can execute up to 40 unique `request.*()` function calls. A `request.*()` call is unique only if the script does not already call the same function with the same arguments. See this section of the User Manual's Limitations page for more information.
• Typically, when requesting higher-timeframe data with request.security() using barmerge.lookahead_on as the `lookahead` argument, the `expression` argument should use the history-referencing operator to offset the series, preventing lookahead bias on historical bars. However, the request.security() call in this script uses barmerge.lookahead_on without offsetting the `expression` because the script only displays results for the latest historical bar and all realtime bars, where there is no future information to leak into the past. Instead, using this call on those bars ensures each request fetches the most recent data available from each context.
• The request.security() instance in this script includes a `calc_bars_count` argument to specify that each request retrieves only a minimal number of bars from the end of each symbol's historical data feed. The script does not need to request all the historical data for each symbol because it only shows results on the last chart bar that do not depend on the entire time series. In this case, reducing the retrieved bars in each request helps minimize resource usage without impacting the calculated results.
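The sketch below ties these points together: one request.security() instance running in a loop over a "series string" symbol list, using barmerge.lookahead_on without an offset and a small `calc_bars_count`. The symbol prefix, timeframe, and table layout are illustrative assumptions, not this indicator's actual implementation.

```pine
//@version=5
indicator("Dynamic requests sketch", dynamic_requests = true)

// Hypothetical currency pairs; any valid symbols could be substituted.
array<string> pairs = array.from("FX_IDC:EURUSD", "FX_IDC:USDJPY", "FX_IDC:GBPUSD")

var table display = table.new(position.top_right, 2, 3, border_width = 1)

if barstate.islast
    for [i, sym] in pairs
        // With `dynamic_requests = true`, this single request.security() instance can run
        // inside a loop with a "series string" symbol. `lookahead_on` is used without a
        // historical offset because results only show on the last bar, and `calc_bars_count`
        // keeps each request from loading more history than needed.
        float dailyChg = request.security(sym, "1D", (close - open) / open * 100, lookahead = barmerge.lookahead_on, calc_bars_count = 10)
        table.cell(display, 0, i, sym)
        table.cell(display, 1, i, str.tostring(dailyChg, "#.##") + "%")
```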
Look first. Then leap.
Ema Short Long Indicator [CHE]
█ CONCEPTS
This Pine Script is an EMA Short Long indicator that displays crossing EMA lines on the chart. The indicator uses three exponential moving averages (EMAs) to generate the buy and sell signals. The EMA lines are plotted as green (uptrend) and red (downtrend) lines. When the green line is above the white signal line, the indicator generates a buy signal; when the green line is below the white signal line, it generates a sell signal. Arrows are also displayed marking the buy and sell signals. There is also an option to allow or disallow indicator repainting. Finally, users can set alerts to be notified of potential trading opportunities.
Note: please do not disable "Timeframe gaps". The timeframe (TF) input allows the indicator to be calculated on a TF different from the chart's. The TF should ideally be higher than the chart's to provide a broader perspective. Using a TF lower than the chart's will deliver fragmentary results, since only the last intrabar value is displayed (multiple values cannot be shown for a single chart bar). The "Gaps" setting determines the behavior when the TF is higher than the chart's. If "Gaps" is checked, higher-TF values only appear and are connected on the chart once the higher-TF bar completes; this has the advantage of avoiding realtime repainting. If "Gaps" is not enabled, gaps are filled with the last calculated higher-TF value, which does not repaint values on historical bars but does repaint values in realtime.
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how).
Time period
By default, the script uses an auto-stepping mechanism to adjust the time period of its moving window to the chart's timeframe. The following table shows chart timeframes and the corresponding time period used by the script. When the chart's timeframe is less than or equal to the timeframe in the first column, the second column's time period is used to calculate the Ema Short Long Indicator:
Chart timeframe 🠆 Time period
1min 🠆 1H
5min 🠆 4H
1H 🠆 1D
4H 🠆 3D
12H 🠆 1W
1D 🠆 1M
1W 🠆 3M
█ DESCRIPTION
The script begins by setting up the chart indicator with a short title, "ESLI", and enabling it as an overlay. It then initializes several variables for time conversions, to be used later in the script.
The timeStep_translate() function converts the timeframe of the chart into a string representing a larger time interval, based on the number of seconds in the timeframe. The resulting string is used to label the horizontal axis of the chart.
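A minimal, hedged sketch of such a helper is shown below, assuming it relies on timeframe.in_seconds() and the breakpoints from the table above; the published script's exact implementation may differ.

```pine
//@version=5
indicator("Auto-step sketch", overlay = true, dynamic_requests = true)

// Hedged sketch of an auto-stepping helper similar to timeStep_translate().
timeStep_translate() =>
    int tfSec = timeframe.in_seconds(timeframe.period)
    switch
        tfSec <= 60     => "60"   // chart <= 1min -> 1H
        tfSec <= 300    => "240"  // chart <= 5min -> 4H
        tfSec <= 3600   => "D"    // chart <= 1H   -> 1D
        tfSec <= 14400  => "3D"   // chart <= 4H   -> 3D
        tfSec <= 43200  => "W"    // chart <= 12H  -> 1W
        tfSec <= 86400  => "M"    // chart <= 1D   -> 1M
        tfSec <= 604800 => "3M"   // chart <= 1W   -> 3M
        =>                 "12M"

// Usage example: request a higher-timeframe close with the auto-stepped period.
float htfClose = request.security(syminfo.tickerid, timeStep_translate(), close)
plot(htfClose, "Higher-timeframe close", color.orange)
```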
Next, the script defines several input variables that can be modified by the user. These include the colors of the EMA lines and the signals, whether or not the indicator is allowed to repaint (i.e. update past values based on future data), and the number of periods used to calculate the EMA and signal lines.
The f_security() function calls the request.security() function to fetch data from the specified security and timeframe, and is used to calculate the EMA and signal lines using the ta.ema() function. The clo variable is assigned the closing price data, adjusted for repainting and timeframe.
The EMA line is calculated using a weighted average of the EMA over the specified period and two times that period, as well as three times that period, divided by six. The signal line is calculated as the EMA of the EMA line over the specified period.
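A hedged sketch of that construction follows. The 1-2-3 weighting (summing to six) is an interpretation of the description above, and plain `close` stands in for the repaint/timeframe-adjusted `clo` series, so treat it as an approximation rather than the script's exact code.

```pine
//@version=5
indicator("ESLI EMA sketch", overlay = true)

int per = input.int(14, "EMA period", minval = 1)

// Weighted blend of EMAs over per, 2*per, and 3*per, divided by six (assumed weights 1, 2, 3).
float emaLine = (ta.ema(close, per) + 2.0 * ta.ema(close, per * 2) + 3.0 * ta.ema(close, per * 3)) / 6.0
// Signal line: EMA of the blended line over the same period.
float signal = ta.ema(emaLine, per)

plot(emaLine, "EMA line", emaLine > signal ? color.green : color.red, 2)
plot(signal, "Signal", color.white)
```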
The col_css variable sets the color of the EMA line based on whether it is currently above or below the signal line. The script then plots the EMA and signal lines, and uses the plotshape() function to indicate long and short signals based on the crossovers and crossunders of the EMA and signal lines.
Finally, the script sets up alert conditions using the alertcondition() function to notify the user when a long or short signal is generated, including information about the symbol and closing price.
█ SPECIAL THANKS
Special thanks to LOXX, I wanted to take a moment to express my gratitude for his valuable input in the EMA calculation. His insights and expertise have greatly helped me in improving my Pine Script coding skills. Thanks to his suggestion, I was able to better understand the EMA formula and implement it effectively in my script.
Your generosity in sharing your knowledge and experience is truly appreciated. It is through collaboration and exchanging ideas that we can all grow and become better in our craft.
This script provides clear signals that, combined with suitable additional indicators, can deliver very good results.
Best regards
Chervolino
Selected Dates Filter by @zeusbottrading
We are presenting a date-filter feature for strategies in Pine Script.
This function/Pine Script is about NOT opening trades on selected days. The real-world use is for bank holidays or volatile days (PPI, CPI, interest rates, etc.) in the United States and the United Kingdom from 2020 to 2030 (10 years of bank-holiday dates for the countries mentioned above). The strategy is simple: an SMA crossover of two lengths, 14 and 28, with the close as source.
In the Pine Script you can see that we picked US and GB bank holidays. If you add this to your strategy, your bot will not open trades on those days. You must make it a rule or a condition; we use it as a rule for opening long/short trades.
You can also add your own preferred dates; this is just an example of our idea. If you want to add your preferred days, you can find them on any site like Forex Factory, Myfxbook, and so on. But don't forget to add the condition "time_tradingday != YourChoosedDate", as written lower in the Pine Script.
Sometimes a date is substituted with a different day because the holiday falls on a Saturday or Sunday.
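Below is a minimal, hedged sketch of the idea: an SMA 14/28 crossover strategy that skips entries when the trading day matches a blocked date. The two dates in the helper are placeholders, not the script's full 2020-2030 holiday list.

```pine
//@version=5
strategy("Selected dates filter sketch", overlay = true)

// Hypothetical blocked-date check; extend it with your own dates.
isBlockedDay() =>
    int y = year(time_tradingday)
    int m = month(time_tradingday)
    int d = dayofmonth(time_tradingday)
    (y == 2025 and m == 12 and d == 25) or (y == 2025 and m == 7 and d == 4)

float fastMA = ta.sma(close, 14)
float slowMA = ta.sma(close, 28)

// The filter is used as a rule: no entries on blocked days.
bool canTrade = not isBlockedDay()

if ta.crossover(fastMA, slowMA) and canTrade
    strategy.entry("Long", strategy.long)
if ta.crossunder(fastMA, slowMA) and canTrade
    strategy.entry("Short", strategy.short)
```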
Made with ❤️ for this community.
If you have any questions or suggestions, let us know.
The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone assume the sole responsibility of evaluating the script output and the risks associated with the use of the script. In exchange for using the script, you agree not to hold the zeusbottrading TradingView user liable for any possible claim for damages arising from any decision you make based on use of the script.
kNNLibrary "kNN"
Collection of experimental kNN functions. This is a work in progress, an improvement upon my original kNN script:
The script can be recreated with this library. Unlike the original script, that used multiple arrays, this has been reworked with the new Pine Script matrix features.
To make a kNN prediction, the following data should be supplied to the wrapper:
kNN : filter type. Right now either Binary or Percent. Binary works like in the original script: the system stores whether the price has increased (+1) or decreased (-1) since the previous knnStore event (called when either long or short condition is supplied). Percent works the same, but the values stored are the difference of prices in percents. That way larger differences in prices would give higher scores.
k : number k. This is how many nearest neighbors are to be selected (and summed up to get the result).
skew : kNN minimum difference. Normally, the prediction is done with a simple majority of the neighbor votes. If skew is given, then more than a simple majority is needed for a prediction. This also means that there are inputs for which no prediction would be given (if the majority votes are between -skew and +skew). Note that in Percent mode more profitable trades will have higher voting power.
depth : kNN matrix size limit. Originally, the whole available history of trades was used to make a prediction. This not only requires more computational power, but also neglects the fact that the market conditions are changing. This setting restricts the memory matrix to a finite number of past trades.
price : price series
long : long condition. True if the long conditions are met, but filters are not yet applied. For example, in my original script, trades are only made on crossings of fast and slow MAs. So, whenever it is possible to go long, this value is set true. False otherwise.
short : short condition. Same as long, but for the short condition.
store : whether the inputs should be stored. Additional filters may be applied to prevent bad trades (for example, trend-based filters), so if you only need to consult kNN without storing the trade, this should be set to false.
feature1 : current value of feature 1. A feature in this case is some kind of data derived from the price. Different features may be used to analyse the price series. For example, oscillator values. Not all of them may be used for kNN prediction. As the current kNN implementation is 2-dimensional, only two features can be used.
feature2 : current value of feature 2.
The wrapper returns a tuple: [ longOK, shortOK ]. This is a pair of filters. When longOK is true, then kNN predicts a long trade may be taken. When shortOK is true, then kNN predicts a short trade may be taken. The kNN filters are returned whenever long or short conditions are met. The trade is supposed to happen when long or short conditions are met and when the kNN filter for the desired direction is true.
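A hedged usage sketch follows. The import path is a placeholder (replace AUTHOR and the version number with the library's actual publishing coordinates), and the MA-crossover conditions, RSI/ROC features, and argument values are illustrative assumptions.

```pine
//@version=5
indicator("kNN wrapper usage sketch", overlay = true)

// Placeholder import; substitute the real author name and version.
import AUTHOR/kNN/1 as knn

float fastMA = ta.sma(close, 14)
float slowMA = ta.sma(close, 28)
bool longCond  = ta.crossover(fastMA, slowMA)
bool shortCond = ta.crossunder(fastMA, slowMA)

// Two example features derived from price.
float f1 = ta.rsi(close, 14)
float f2 = ta.roc(close, 14)

// Ask the kNN filter whether the pending trade direction is predicted to work.
[longOK, shortOK] = knn.doKNN("Binary", 5, 0, 100, close, longCond, shortCond, true, f1, f2)

plotshape(longCond and longOK, "Long", shape.triangleup, location.belowbar, color.green)
plotshape(shortCond and shortOK, "Short", shape.triangledown, location.abovebar, color.red)
```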
Exported functions:
knnStore(knn, p1, p2, src, maxrows)
Store the previous trade; buffer the current one until results are in. Results are binary: up/down
Parameters:
knn : knn matrix
p1 : feature 1 value
p2 : feature 2 value
src : current price
maxrows : limit the matrix size to this number of rows (0 for no limit)
Returns: modified knn matrix
knnStorePercent(knn, p1, p2, src, maxrows)
Store the previous trade; buffer the current one until results are in. Results are in percents
Parameters:
knn : knn matrix
p1 : feature 1 value
p2 : feature 2 value
src : current price
maxrows : limit the matrix size to this number of rows (0 for no limit)
Returns: modified knn matrix
knnGet(distance, result)
Get neighbours by getting k results with the smallest distances
Parameters:
distance : distance array
result : result array
Returns: array slice of k results
knnDistance(knn, p1, p2)
Create a distance array from the two given parameters
Parameters:
knn : knn matrix
p1 : feature 1 value
p2 : feature 2 value
Returns: distance array
knnSum(knn, p1, p2, k)
Make a prediction, finding k nearest neighbours and summing them up
Parameters:
knn : knn matrix
p1 : feature 1 value
p2 : feature 2 value
k : sum k nearest neighbors
Returns: sum of k nearest neighbors
doKNN(kNN, k, skew, depth, price, long, short, store, feature1, feature2)
execute kNN filter
Parameters:
kNN : filter type
k : number k
skew : kNN minimum difference
depth : kNN matrix size limit
price : series
long : long condition
short : short condition
store : store the supplied features (if false, only checks the results without storage)
feature1 : feature 1 value
feature2 : feature 2 value
Returns: filter output
DMI + HMA - No Risk Management
DMI (Directional Movement Index) and HMA (Hull Moving Average)
The DMI and HMA make a great combination: the DMI gauges the market direction, while the HMA adds confirmation of the trend strength.
What is the DMI?
The DMI is an indicator that was developed by J. Welles Wilder in 1978. The Indicator was designed to identify in which direction the price is moving. This is done by comparing previous highs and lows and drawing 2 lines.
1. A Positive movement line
2. A Negative movement line
A third line can be added, which would be known as the ADX line or Average Directional Index. This can also be used to gauge the strength in which direction the market is moving.
When the Positive movement line (DI+) is above the Negative movement line (DI-), there is more upward pressure. Of course, vice versa: when the DI- is above the DI+, that indicates more downward pressure.
Want to know more about HMA? Check out one of our other published scripts
What is this strategy doing?
We are first waiting for the DMI to cross in our favoured direction, after that, we wait for the HMA to signal the entry. Without both conditions being true, no trade will be made.
Long Entries
1. DI+ crosses above DI-
2. HMA line 1 is above HMA line 2
Short Entries
1. DI- Crosses above DI+
2. HMA line 1 is below HMA line 2
It's as simple as that.
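Here is a minimal, hedged sketch of those rules; the DMI and HMA lengths are placeholder values, not the published script's optimised settings.

```pine
//@version=5
strategy("DMI + HMA sketch", overlay = true)

// Directional movement: +DI, -DI, and ADX (ADX unused here).
[diPlus, diMinus, adxValue] = ta.dmi(14, 14)

// Two Hull moving averages; the lengths are assumptions.
float hma1 = ta.hma(close, 21)
float hma2 = ta.hma(close, 50)

// Long: DI+ crosses above DI- while HMA line 1 is above HMA line 2.
if ta.crossover(diPlus, diMinus) and hma1 > hma2
    strategy.entry("Long", strategy.long)

// Short: DI- crosses above DI+ while HMA line 1 is below HMA line 2.
if ta.crossover(diMinus, diPlus) and hma1 < hma2
    strategy.entry("Short", strategy.short)
```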
Conclusion
While this strategy does have its downsides, they can be reduced by adding some risk management to the script. In general, the trade profitability is above average, and the max drawdown is at a minimum.
The settings have been optimised to suit the BTCUSDT PERP market. Though with small adjustments, it can be used on many assets!
Strat Assistant
This script will help you follow the Strat. While other collections of scripts exist that offer similar functionality, the idea of this one (a work in progress) is to be a one-stop shop for all things Strat that will evolve over time. I am fairly new to the Strat and Pine Script. The script is for informational purposes only. Please do your due diligence.
Features:
=Candle numbering: will number candles underneath based on the prior candle: 1 for an inside bar, 2 for a directional bar (up or down), and 3 for an outside bar (a sketch of this logic follows the feature list).
=Candle coloring: will highlight candles. Yellow for an inside candle, magenta for an outside candle, red for a 2 down candle, green for a 2 up candle. It will not modify the outside border of the candle so you can still see green if the open was lower than the close or red if the close was below open.
=Candle shape: will place an arrow up if the 2 candle is a directional UP and arrows down if the 2 candle is a directional DOWN. It will display red if it's bearish and green if it's bullish.
=Strat combos: will provide a text description of all currently applicable strat combinations if they are active at the top right of the chart. It will display red if it's bearish and green if it's bullish.
=Actionable signals: will provide a text description of actionable signals if they are active on the bottom right of the chart. Inside bar if the bar is inside the prior bar; the color of this signal will be blue (shows better on a white background). Hammer when 75% of the candle's range is lower wick and the open and close sit above that lower 75%. Hammers display green for bullish. Shooters are just the opposite of hammers: 75% of the range is upper wick, and the open and close sit below it. Shooters display red for bearish.
=Time Frame Continuity: will provide time frame continuity across 15m, 30m, Hour, Day, Week, Quarter, and Year, with green arrows up if the close is above the open for the given time frame, or red arrows down if the close is below the open. It will also determine whether each time frame is applicable based on the time frame the user selects, and ensure history exists for the given time frame.
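Below is a hedged sketch of the candle numbering and coloring logic from the feature list above; the exact tie-breaking rules in the published script may differ.

```pine
//@version=5
indicator("Strat candle numbering sketch", overlay = true)

// 1 = inside bar, 3 = outside bar, 2 = directional bar (up or down).
bool insideBar  = high <= high[1] and low >= low[1]
bool outsideBar = high > high[1] and low < low[1]
bool twoUp      = not insideBar and not outsideBar and high > high[1]
bool twoDown    = not insideBar and not outsideBar and low < low[1]

// Candle coloring: yellow inside, magenta outside, green 2-up, red 2-down.
barcolor(insideBar ? color.yellow : outsideBar ? color.fuchsia : twoUp ? color.green : color.red)

// Candle numbering below each bar.
plotchar(insideBar, "Inside (1)", "1", location.belowbar, color.yellow)
plotchar(outsideBar, "Outside (3)", "3", location.belowbar, color.fuchsia)
plotchar(twoUp, "Two up (2)", "2", location.belowbar, color.green)
plotchar(twoDown, "Two down (2)", "2", location.belowbar, color.red)
```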
Backlog / Work in progress:
=Opacity for time frame continuity
=Line indicators (or maybe just a label) for highs and lows of previous periods (hour, day, week, quarter)
=Alert conditions
=User input for various indicators
McGinley Dynamic (Improved) - John R. McGinley, Jr.
For all the McGinley enthusiasts out there, this is my improved version of the "McGinley Dynamic", originally formulated and publicized in 1990 by John R. McGinley, Jr. Prior to this release, I recently had an encounter with a member request regarding the reliability and stability of the general algorithm. Years ago, I attempted to discover the root of its inconsistency, but success was not possible until now. Being no stranger to a good old fashioned computational crisis, I revisited it with considerable contemplation.
I discovered a lack of constraints in the formulation that either caused the algorithm to implode to near-zero or zero values OR to explosively enlarge to near-infinite values during unusual price action volatility conditions, occurring on different time frames. A numeric E-notation in a moving average doesn't mean a stock just shot up in excess of a few quintillion in value from just "10ish" moments ago. Anyone experienced with the usual McGinley Dynamic has probably encountered this, with dynamically dramatic surprises in their chart destroying its usability.
Well, I believe I have found an answer to this dilemma of 'susceptibility to miscalculation', to provide what is most likely McGinley's whole hearted intention. It required upgrading the formulation with two constraints applied to it using min/max() functions. Let me explain why below.
When using base numbers with an exponent to the power of four, some miniature numbers smaller than one can numerically collapse to near 0 values, or even 0.0 itself. A denominator of zero will always give any computational device a horribly bad day, not to mention the developer. Let this be an EASY lesson in computational division, I often entertainingly express to others. You have heard the terminology "$#|T happens!🙂" right? In the programming realm, "AnyNumber/0.0 CAN happen!🤪" too, and it happens "A LOT" unexpectedly, even when it's highly improbable. On the other hand, numbers a bit larger than 2 with the power of four can tremendously expand rapidly to the numeric limits of 64-bit processing, generating ginormous spikes on a chart.
The ephemeral presence of one OR both of those potentials now has a combined satisfactory remedy, AND you as TV members now have it, endowed with the ever evolving "Power of Pine". Oh yeah, this one plots from bar_index==0 too. It also has experimental settings tweaks to play with, that may reveal untapped potential of this formulation. This function now has gain of function capabilities, NOT to be confused with viral gain of function enhancements from reckless BSL-4 leaking laboratories that need to be eternally abolished from this planet. Although, I do have hopes this imd() function has the potential to go viral. I believe this improved function may have utility in the future by developers of the TradingView community. You have the source, and use it wisely...
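For illustration only, here is a minimal sketch of a McGinley Dynamic with a clamped denominator. The clamp bounds and the omission of the usual k factor are assumptions for demonstration; they are not the published imd() constants, so refer to the script source for the actual constraints.

```pine
//@version=5
indicator("Constrained McGinley sketch", overlay = true)

int length = input.int(14, "Length", minval = 1)

// McGinley Dynamic with the (price/MD)^4 term clamped so the denominator can
// neither collapse toward zero nor explode (bounds are illustrative assumptions).
mcginleyClamped(float src, int len) =>
    var float md = na
    float ratio = na(md) ? 1.0 : src / md
    float denom = len * math.min(math.max(math.pow(ratio, 4.0), 0.1), 10.0)
    md := na(md) ? src : md + (src - md) / denom
    md

plot(mcginleyClamped(close, length), "Constrained McGinley", color.orange, 2)
plot(ta.ema(close, length), "EMA (comparison)", color.new(color.gray, 40))
```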
I included a generic ema() plot for a basic comparison, ultimately unveiling some of this algorithm's unique characteristics differing on a variety of time frames. Also, another unconstrained function is included to display some of the disparities of having no limitations on a divisor in the calculation. I strongly advise against the use of umd() in any published script. There is simply no reason to even ponder using it. I also included notes in the script to warn against this. It's funny now, but some folks don't always read/understand my advisories... You have been warned!
NOTICE: You have absolute freedom to use this source code any way you see fit within your new Pine projects, and that includes TV themselves. You don't have to ask for my permission to reuse this improved function in your published scripts, simply because I have better things to do than answer requests for the reuse of this simplistic imd() function. Sufficient accreditation regarding this script and compliance with "TV's House Rules" regarding code reuse, is as easy as copying the entire function as is. Fair enough? Good! I have a backlog of "computational crises" to contend with, including another one during the writing of this elaborate description.
When available time provides itself, I will consider your inquiries, thoughts, and concepts presented below in the comments section, should you have any questions or comments regarding this indicator. When my indicators achieve more prevalent use by TV members, I may implement more ideas when they present themselves as worthy additions. Have a profitable future everyone!
Value at Risk [OmegaTools]
The "Value at Risk" (VaR) indicator is a powerful financial risk management tool that helps traders estimate the potential losses in a portfolio over a specified period of time, given a certain level of confidence. VaR is widely used by financial institutions, traders, and risk managers to assess the probability of portfolio losses in both normal and volatile market conditions. This TradingView script implements a comprehensive VaR calculation using several models, allowing users to visualize different risk scenarios and adjust their trading strategies accordingly.
Concept of Value at Risk
Value at Risk (VaR) is a statistical technique used to measure the likelihood of losses in a portfolio or financial asset due to market risks. In essence, it answers the question: "What is the maximum potential loss that could occur in a given portfolio over a specific time horizon, with a certain confidence level?" For instance, if a portfolio has a one-day 95% VaR of $10,000, it means that there is a 95% chance the portfolio will not lose more than $10,000 in a single day. Conversely, there is a 5% chance of losing more than $10,000. VaR is a key risk management tool for portfolio managers and traders because it quantifies potential losses in monetary terms, allowing for better-informed decision-making.
There are several ways to calculate VaR, and this indicator script incorporates three of the most commonly used models:
Historical VaR: This approach uses historical returns to estimate potential losses. It is based purely on past price data, assuming that the past distribution of returns is indicative of future risks.
Variance-Covariance VaR: This model assumes that asset returns follow a normal distribution and that the risk can be summarized using the mean and standard deviation of past returns. It is a parametric method that is widely used in financial risk management.
Exponentially Weighted Moving Average (EWMA) VaR: In this model, recent data points are given more weight than older data. This dynamic approach allows the VaR estimation to react more quickly to changes in market volatility, which is particularly useful during periods of market stress. This model uses the Exponential Weighted Moving Average Volatility Model.
How the Script Works
The script starts by offering users a set of customizable input settings. The first input allows the user to choose between two main calculation modes: "All" or "OCT" (Only Current Timeframe). In the "All" mode, the script calculates VaR using all available methodologies—Historical, Variance-Covariance, and EWMA—providing a comprehensive risk overview. The "OCT" mode narrows the calculation to the current timeframe, which can be particularly useful for intraday traders who need a more focused view of risk.
The next input is the lookback window, which defines the number of historical periods used to calculate VaR. Commonly used lookback periods include 21 days (approximately one month), 63 days (about three months), and 252 days (roughly one year), with the script supporting up to 504 days for more extended historical analysis. A longer lookback period provides a more comprehensive picture of risk but may be less responsive to recent market conditions.
The confidence level is another important setting in the script. This represents the probability that the loss will not exceed the VaR estimate. Standard confidence levels are 90%, 95%, and 99%. A higher confidence level results in a more conservative risk estimate, meaning that the calculated VaR will reflect a more extreme loss scenario.
In addition to these core settings, the script allows users to customize the visual appearance of the indicator. For example, traders can choose different colors for "Bullish" (Risk On), "Bearish" (Risk Off), and "Neutral" phases, as well as colors for highlighting "Breaks" in the data, where returns exceed the calculated VaR. These visual cues make it easy to identify periods of heightened risk at a glance.
The actual VaR calculation is broken down into several models, starting with the Historical VaR calculation. This is done by computing the logarithmic returns of the asset's closing prices and then using linear interpolation to determine the percentile corresponding to the desired confidence level. This percentile represents the potential loss in the asset over the lookback period.
Next, the script calculates Variance-Covariance VaR using the mean and standard deviation of the historical returns. The standard deviation is multiplied by a z-score corresponding to the chosen confidence level (e.g., 1.645 for 95% confidence), and the resulting value is subtracted from the mean return to arrive at the VaR estimate.
The EWMA VaR model uses an exponentially weighted moving average to estimate the sigma parameter (the standard deviation), giving the volatility estimate a specific dynamic. It is particularly useful in volatile markets where recent price behavior is more indicative of future risk than older data.
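To make the three models concrete, here is a minimal, hedged sketch of each estimate on the chart's symbol; the lookback, z-score, and EWMA lambda are illustrative values, and the published indicator's exact formulas may differ.

```pine
//@version=5
indicator("VaR models sketch")

int   lookback = input.int(252, "Lookback", minval = 10)
float zScore   = 1.645  // approx. 95% confidence
float lambda   = 0.94   // EWMA decay

float r = math.log(close / nz(close[1], close))

// Historical VaR: percentile of past log returns at the (1 - confidence) level.
float[] rets = array.new_float()
for i = 0 to lookback - 1
    array.push(rets, nz(math.log(close[i] / close[i + 1])))
float histVaR = array.percentile_linear_interpolation(rets, 5.0)

// Variance-covariance VaR: mean return minus z times the standard deviation.
float paramVaR = ta.sma(r, lookback) - zScore * ta.stdev(r, lookback)

// EWMA VaR: exponentially weighted variance reacts faster to recent volatility.
var float ewmaVariance = 0.0
ewmaVariance := lambda * ewmaVariance + (1.0 - lambda) * r * r
float ewmaVaR = ta.sma(r, lookback) - zScore * math.sqrt(ewmaVariance)

plot(histVaR, "Historical VaR", color.blue)
plot(paramVaR, "Variance-covariance VaR", color.orange)
plot(ewmaVaR, "EWMA VaR", color.red)
```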
For traders interested in intraday risk management, the script provides several methods to adjust VaR calculations for lower timeframes. By using intraday returns and scaling them according to the chosen timeframe, the script provides a dynamic view of risk throughout the trading day. This is especially important for short-term traders who need to manage their exposure during high-volatility periods within the same day. The script also incorporates an EWMA model for intraday data, which gives greater weight to the most recent intraday price movements.
In addition to calculating VaR, the script also attempts to detect periods where the asset's returns exceed the estimated VaR threshold, referred to as "Breaks." When the returns breach the VaR limit, the script highlights these instances on the chart, allowing traders to quickly identify periods of extreme risk. The script also calculates the average of these breaks and displays it for comparison, helping traders understand how frequently these high-risk periods occur.
The script further visualizes the risk scenario using a risk phase classification system. Depending on the level of risk, the script categorizes the market as either "Risk On," "Risk Off," or "Risk Neutral." In "Risk On" mode, the market is considered bullish, and the indicator displays a green background. In "Risk Off" mode, the market is bearish, and the background turns red. If the market is neither strongly bullish nor bearish, the background turns neutral, signaling a balanced risk environment.
Traders can customize whether they want to see this risk phase background, along with toggling the display of the various VaR models, the intraday methods, and the break signals. This flexibility allows traders to tailor the indicator to their specific needs, whether they are day traders looking for quick intraday insights or longer-term investors focused on historical risk analysis.
The "Risk On" and "Risk Off" phases calculated by this Value at Risk (VaR) script introduce a novel approach to market risk assessment, offering traders an advanced toolset to gauge market sentiment and potential risk levels dynamically. These risk phases are built on a combination of traditional VaR methodologies and proprietary logic to create a more responsive and intuitive way to manage exposure in both normal and volatile market conditions. This method of classifying market conditions into "Risk On," "Risk Off," or "Risk Neutral" is not something that has been traditionally associated with VaR, making it a groundbreaking addition to this indicator.
How the "Risk On" and "Risk Off" Phases Are Calculated
In typical VaR implementations, the focus is on calculating the potential losses at a given confidence level without providing an overall market outlook. This script, however, introduces a unique risk classification system that takes the output of various VaR models and translates it into actionable signals for traders, marking whether the market is in a Risk On, Risk Off, or Risk Neutral phase.
The Risk On and Risk Off phases are primarily determined by comparing the current returns of the asset to the average VaR calculated across several different methods, including Historical VaR, Variance-Covariance VaR, and EWMA VaR. Here's how the process works:
1. Threshold Setting and Effect Calculation: The script first computes the average VaR using the selected models. It then checks whether the current returns (expressed as a negative value to signify loss) exceed the average VaR value. If the current returns surpass the calculated VaR threshold, this indicates that the actual market risk is higher than expected, signaling a potential shift in market conditions.
2. Break Analysis: In addition to monitoring whether returns exceed the average VaR, the script counts the number of instances within the lookback period where this breach occurs. This is referred to as the "break effect." For each period in the lookback window, the script checks whether the returns surpass the calculated VaR threshold and increments a counter. The percentage of periods where this breach occurs is then calculated as the "effect" or break percentage.
3. Dual Effect Check (if "Double" Risk Scenario is selected): When the user chooses the "Double" risk scenario mode, the script performs two layers of analysis. First, it calculates the effect of returns exceeding the VaR threshold for the current timeframe. Then, it calculates the effect for the lower intraday timeframe as well. Both effects are compared to the user-defined confidence level (e.g., 95%). If both effects exceed the confidence level, the market is deemed to be in a high-risk situation, thus triggering a Risk Off phase. If both effects fall below the confidence level, the market is classified as Risk On.
4. Risk Phases Determination: The final risk phase is determined by analyzing these effects in relation to the confidence level:
- Risk On: If the calculated effect of breaks is lower than the confidence level (e.g., fewer than 5% of periods show returns exceeding the VaR threshold for a 95% confidence level), the market is considered to be in a relatively safe state, and the script signals a "Risk On" phase. This is indicative of bullish conditions where the potential for extreme loss is minimal.
- Risk Off: If the break effect exceeds the confidence level (e.g., more than 5% of periods show returns breaching the VaR threshold), the market is deemed to be in a high-risk state, and the script signals a "Risk Off" phase. This indicates bearish market conditions where the likelihood of significant losses is higher.
- Risk Neutral: If the break effect hovers near the confidence level or if there is no clear trend indicating a shift toward either extreme, the market is classified as "Risk Neutral." In this phase, neither bulls nor bears are dominant, and traders should remain cautious.
The phase color that the script uses helps visualize these risk phases. The background will turn green in Risk On conditions, red in Risk Off conditions, and gray in Risk Neutral phases, providing immediate visual feedback on market risk. In addition to this, when the "Double" risk scenario is selected, the background will only turn green or red if both the current and intraday timeframes confirm the respective risk phase. This double-checking process ensures that traders are only given a strong signal when both longer-term and short-term risks align, reducing the likelihood of false signals.
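As a hedged illustration of steps 1 to 4, the sketch below counts how often realized returns breached a placeholder VaR estimate inside the lookback window and flags Risk Off when that share exceeds the allowance implied by the confidence level; the actual indicator averages several VaR models and adds the intraday layer described above.

```pine
//@version=5
indicator("Break effect sketch")

int   lookback   = input.int(252, "Lookback", minval = 10)
float confidence = input.float(0.95, "Confidence", minval = 0.5, maxval = 0.999)

// Placeholder VaR threshold; the full indicator uses the average of its VaR models.
float r      = math.log(close / nz(close[1], close))
float avgVaR = ta.sma(r, lookback) - 1.645 * ta.stdev(r, lookback)

// Count the lookback periods whose return breached the VaR threshold.
int breaks = 0
for i = 0 to lookback - 1
    float pastRet = nz(math.log(close[i] / close[i + 1]))
    if pastRet < avgVaR
        breaks += 1

float breakEffect = 100.0 * breaks / lookback
bool  riskOff     = breakEffect > (1.0 - confidence) * 100.0

plot(breakEffect, "Break effect %", riskOff ? color.red : color.green, 2)
```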
A New Way of Using Value at Risk
This innovative Risk On/Risk Off classification, based on the interaction between VaR thresholds and market returns, represents a significant departure from the traditional use of Value at Risk as a pure risk measurement tool. Typically, VaR is employed as a backward-looking measure of risk, providing a static estimate of potential losses over a given timeframe with no immediate actionable feedback on current market conditions. This script, however, dynamically interprets VaR results to create a forward-looking, real-time signal that informs traders whether they are operating in a favorable (Risk On) or unfavorable (Risk Off) environment.
By incorporating the "break effect" analysis and allowing users to view the VaR breaches as a percentage of past occurrences, the script adds a predictive element that can be used to time market entries and exits more effectively. This **dual-layer risk analysis**, particularly when using the "Double" scenario mode, adds further granularity by considering both current timeframe and intraday risks. Traders can therefore make more informed decisions not just based on historical risk data, but on how the market is behaving in real-time relative to those risk benchmarks.
This approach transforms the VaR indicator from a risk monitoring tool into a decision-making system that helps identify favorable trading opportunities while alerting users to potential market downturns. It provides a more holistic view of market conditions by combining both statistical risk measurement and intuitive phase-based market analysis. This level of integration between VaR methodologies and real-time signal generation has not been widely seen in the world of trading indicators, marking this script as a cutting-edge tool for risk management and market sentiment analysis.
I would like to express my sincere gratitude to @skewedzeta for his invaluable contribution to the final script. From generating fresh ideas to applying his expertise in reviewing the formula, his support has been instrumental in refining the outcome.
Johnny's Moving Average Ribbon
Props to Madrid for creating the original script: Madrid Moving Average Ribbon.
All I did was upgrade it to Pine Script v5 and add a few changes to the script.
Features and Functionality
Moving Average Types: The indicator offers a choice between exponential moving averages (EMAs) and simple moving averages (SMAs), allowing users to select the type that best fits their trading strategy.
Dynamic Color Coding: Each moving average line within the ribbon changes color based on its direction and position relative to a reference moving average, providing visual cues for market sentiment and trend strength.
Lime Green: Indicates an uptrend and potential long positions, shown when a moving average is rising and above the longer-term reference MA.
Maroon: Suggests caution for long positions or potential short reentry points, displayed when a moving average is rising but below the reference MA.
Ruby Red: Represents a downtrend, suitable for short positions, shown when a moving average is falling and below the reference MA.
Green: Signals potential reentry points for downtrends or warnings for uptrend reversals, displayed when a moving average is falling but above the reference MA.
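The sketch below shows one hedged way to express this color logic for a single ribbon line; the MA lengths are placeholders, and the actual ribbon repeats the same rule across many lines.

```pine
//@version=5
indicator("Ribbon color sketch", overlay = true)

// One ribbon line and the longer-term reference MA (lengths are assumptions).
float ma    = ta.ema(close, 20)
float refMA = ta.ema(close, 100)

bool rising = ma > ma[1]
color lineColor = rising and ma >= refMA ? color.lime : rising ? color.maroon : ma < refMA ? color.red : color.green

plot(ma, "Ribbon line", lineColor, 2)
plot(refMA, "Reference MA", color.gray)
```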
Usage and Application
Trend Identification: Traders can quickly ascertain the market's direction at a glance by observing the predominant color of the ribbon and its orientation.
Trade Entry and Exit Points: The color transitions within the ribbon can signal potential entry or exit points, with changes from green to lime or red to maroon indicating shifts in market momentum.
Customization: Users have the flexibility to toggle between exponential and simple moving averages, allowing for a tailored analytical approach that aligns with their individual trading preferences.
Technical Specifications
The ribbon consists of multiple moving averages calculated over different periods, typically ranging from shorter to longer-term intervals to capture various aspects of market behavior.
The color dynamics are determined by comparing each moving average to a reference point, often a longer-term moving average within the ribbon, to assess the relative trend strength and direction.
Liquidity Heatmap [BigBeluga]
The Liquidity Heatmap is an indicator designed to spot possible resting liquidity or potential stop loss using volume or Open interest.
The Open interest is the total number of outstanding derivative contracts for an asset—such as options or futures—that have not been settled. Open interest keeps track of every open position in a particular contract rather than tracking the total volume traded.
The Volume is the total quantity of shares or contracts traded for the current timeframe.
🔶 HOW IT WORKS
Based on the user choice between Volume or OI, the idea is the same for both.
On each candle, we add the data (volume or OI) below or above it (for longs or shorts) at what should be the hypothetical liquidation levels; the more color a liquidity level has, the more reaction is expected when the price goes through it.
The gradient color is calculated from an average between two points that the user can select. For example, with 500, the script takes the average of the highest data between 500 and 250 bars (half of the user's choice), and the gradient is based on that.
If we take volume as an example, a big volume spike means a lot of long or short activity on that candle. A liquidity level is displayed below/above at the distance implied by the set leverage (4.5 ≈ 20x leverage, as an example), so when the price revisits that zone, all the 20x-leveraged positions should be liquidated.
Huge volume = a lot of activity
Huge OI = a lot of positions opened
More volume / OI will result in a stronger color that will generate a stronger reaction.
🔶 ROUTE
Here's an example of a route for long liquidity:
Enable the filter = consider only green candles.
Set the leverage to 4.5 (20x).
Choose Data = Volume.
Process:
A green candle is formed.
A liquidity level is established.
The level is placed below to simulate the 20x leverage.
Color is applied, considering the average volume within the chosen area.
Route completed.
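A minimal, hedged sketch of that route: one long-liquidity line per green candle, with opacity driven by volume relative to its recent peak. The distance mapping and opacity scaling are assumptions, not the indicator's exact gradient math.

```pine
//@version=5
indicator("Long liquidity sketch", overlay = true, max_lines_count = 500)

float distPct   = input.float(4.5, "Level distance %", step = 0.1)  // roughly a 20x-leverage liquidation distance
int   avgLength = input.int(500, "Opacity average length", minval = 10)

// Volume relative to its recent peak drives line opacity: stronger level = stronger color.
float strength = math.min(volume / ta.highest(volume, avgLength), 1.0)

// Filter: only green candles create long liquidity, drawn below the close.
if close > open
    float lvl = close * (1.0 - distPct / 100.0)
    line.new(bar_index, lvl, bar_index + 1, lvl, extend = extend.right, color = color.new(color.green, 100 - int(strength * 90)))
```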
🔶 FEATURE
Possibility to change the color of both long and short liquidity
Manual opacity value
Manual opacity average
Leverage
Autopilot - set a good average automatically of the opacity value
Enable both long or short liquidity visualization
Filtering - grab only red/green candle of the corresponding side or grab every candle
Data - nzVolume - Volume - nzOI - OI
🔶 TIPS
Since the line limit is 500, it's best to add the script to the chart twice: one instance showing only long liquidity and another showing only short liquidity.
🔶 CONCLUSION
The liquidity levels are an interesting way to think about possible levels, and those are not real levels.
Goertzel Cycle Composite Wave [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis indicator that helps identify cyclical patterns in financial data. This powerful tool is capable of detecting cyclical patterns in financial data, helping traders to make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only the most recent XX bars will render on the chart. You can control this value with the setting "Number of Bars to Render". This doesn't have anything to do with repainting or the indicator being endpointed***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
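For readers who want to see the core recurrence, here is a minimal, hedged sketch that measures the amplitude of one user-chosen cycle period over a rolling window; the detrending here is a simple moving-average subtraction, whereas the indicator itself offers several detrending modes and scans many periods.

```pine
//@version=5
indicator("Goertzel single-cycle sketch")

int   windowSize  = input.int(100, "Window size", minval = 10)
float cyclePeriod = input.float(20.0, "Cycle period (bars)", minval = 2.0)

// Detrend by subtracting a moving average (a simplification of the script's options).
float mean = ta.sma(close, windowSize)

// Goertzel recurrence over the last `windowSize` samples, oldest first.
float coeff = 2.0 * math.cos(2.0 * math.pi / cyclePeriod)
float s1 = 0.0
float s2 = 0.0
for i = 0 to windowSize - 1
    int   offset = windowSize - 1 - i
    float x      = nz(close[offset] - mean[offset])
    float s0     = x + coeff * s1 - s2
    s2 := s1
    s1 := s0

// Squared magnitude of the selected frequency bin, scaled to an amplitude estimate.
float power = s1 * s1 + s2 * s2 - coeff * s1 * s2
plot(2.0 * math.sqrt(math.max(power, 0.0)) / windowSize, "Cycle amplitude")
```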
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles. A small sketch after this list illustrates how these selection inputs interact.
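To make the cycle-selection inputs above more concrete, here is a minimal Python sketch of how StartAtCycle, UseTopCycles, UseCycleList, and SubtractNoise might interact. It is illustrative only: it is not the indicator's Pine Script source, and the function and variable names (select_cycles, cycle_list, the sample cycle data) are hypothetical.

```python
# Illustrative Python sketch (not the indicator's Pine Script source) of how the
# cycle-selection inputs interact. 'cycles' stands in for the detected cycles,
# already ranked strongest-first by amplitude or cycle strength.

def select_cycles(cycles, use_cycle_list, cycle_list, start_at_cycle, use_top_cycles):
    """Return the subset of detected cycles used to build the composite wave."""
    if use_cycle_list:
        # User supplies explicit periods (the Cycle1..Cycle5 style inputs).
        return [c for c in cycles if c["period"] in cycle_list]
    # Otherwise skip the first (start_at_cycle - 1) ranked cycles (assuming a
    # 1-based StartAtCycle) and keep the next use_top_cycles of them.
    start = start_at_cycle - 1
    return cycles[start:start + use_top_cycles]

# Hypothetical ranked cycles and a typical "top 2" selection.
cycles = [{"period": 34, "amp": 1.8}, {"period": 21, "amp": 1.1},
          {"period": 55, "amp": 0.7}, {"period": 13, "amp": 0.4}]
selected = select_cycles(cycles, use_cycle_list=False, cycle_list=[],
                         start_at_cycle=1, use_top_cycles=2)
print([c["period"] for c in selected])   # -> [34, 21]
# With SubtractNoise enabled, the contribution of the unselected cycles would
# additionally be removed from the composite wave rather than simply ignored.
```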
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
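As a rough illustration of the two-pass idea described above, the following NumPy sketch builds a linearly weighted moving average and then forms a low-lag estimate from two passes of it. This is one common zero-lag construction (the first pass plus the difference between the first and second passes); whether the script uses exactly this form is an assumption, and the code is not its Pine implementation.

```python
import numpy as np

def lwma(x, period):
    """Linearly weighted moving average: the most recent samples get the largest weight."""
    x = np.asarray(x, dtype=float)
    w = np.arange(1, period + 1, dtype=float)
    out = np.full(len(x), np.nan)
    for i in range(period - 1, len(x)):
        out[i] = np.dot(x[i - period + 1:i + 1], w) / w.sum()
    return out

def zero_lag_ma(x, period):
    """Low-lag estimate from two LWMA passes: add the first pass's smoothing error back,
    i.e. zlma = lwma + (lwma - lwma(lwma)) = 2*lwma - lwma(lwma)."""
    first = lwma(x, period)
    second = lwma(first, period)
    return 2.0 * first - second
```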
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
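The exact Bartels statistic used by the script is not reproduced here, but the following NumPy sketch illustrates the underlying idea: measure how consistent a candidate cycle's amplitude and phase are across consecutive segments of one period each, and compare the coherent (phase-aligned) amplitude with the amplitude expected from random fluctuations. The names, scaling, and segmenting scheme are illustrative assumptions, not the script's implementation.

```python
import numpy as np

def bartels_ratio(x, period, n_segments):
    """Rough illustration of the Bartels idea: how consistent are the amplitude
    and phase of a candidate cycle across consecutive segments?
    Assumes len(x) >= n_segments * period; simplified, not the script's exact statistic."""
    x = np.asarray(x, dtype=float)
    t = np.arange(period)
    cos_t, sin_t = np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)
    vecs = []
    for k in range(n_segments):
        seg = x[k * period:(k + 1) * period]
        if len(seg) < period:
            break
        a = 2.0 * np.dot(seg, cos_t) / period   # cosine projection of this segment
        b = 2.0 * np.dot(seg, sin_t) / period   # sine projection of this segment
        vecs.append(complex(a, b))
    vecs = np.array(vecs)
    observed = abs(vecs.mean())                            # coherent, phase-aligned amplitude
    expected = np.sqrt((abs(vecs) ** 2).mean() / len(vecs))  # amplitude expected from random phases
    return observed / expected   # values well above 1 suggest a genuine, stable cycle
```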
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
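For readers who want to see the mechanics, below is a compact NumPy sketch of the standard two-sided Hodrick-Prescott filter, written as the penalized least-squares problem it solves. The script's own implementation may use a different numerical scheme, and the lam value shown is only a placeholder.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter in its penalized least-squares form:
    minimize sum((y - trend)^2) + lam * sum((second differences of trend)^2),
    which reduces to solving (I + lam * D'D) trend = y for the trend."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Build the (n-2) x n second-difference operator D.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    cycle = y - trend
    return trend, cycle

# Usage sketch: trend, cycle = hp_filter(closing_prices, lam=1600.0)
```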
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and integrating these advanced functions into a single indicator creates a powerful and versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past window size (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in a matrix:
The calculated waveform for each cycle is stored in the goeWorkPast matrix. This matrix holds the waveforms for the specified time window: each row represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
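To tie the preprocessing, coefficient, amplitude, and phase steps together, here is a compact NumPy sketch of a single-bin Goertzel evaluation. It is an illustrative stand-in, not the indicator's Pine Script: the endpoint detrend, amplitude scaling, and phase convention shown here are assumptions and may differ from the script's exact choices.

```python
import numpy as np

def goertzel_bin(x, period):
    """Amplitude and phase of the cycle with the given period (in bars), computed
    with the Goertzel recursion on an endpoint-detrended window. Illustrative only."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    # Endpoint detrend, as described above: remove the straight line joining
    # the first and last samples so the cyclical content dominates.
    slope = (x[-1] - x[0]) / (n - 1)
    x = x - (x[0] + slope * np.arange(n))

    # Goertzel recursion for a single frequency bin.
    w = 2.0 * np.pi / period
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s

    # Real and imaginary parts of the DFT bin, then amplitude and phase.
    real = s_prev - s_prev2 * np.cos(w)
    imag = s_prev2 * np.sin(w)
    amplitude = 2.0 * np.hypot(real, imag) / n   # approximate waveform amplitude
    phase = np.arctan2(imag, real)
    return amplitude, phase

# Usage sketch: amp, ph = goertzel_bin(closing_prices, period=34)
```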
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
The WindowSizePast is updated to ensure it is at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint Goertzel values used to build the composite wave.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
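As a minimal illustration of the summation step described above, the following Python sketch combines a handful of detected cycles, each described by its period, amplitude, and phase, into one composite waveform. The cycle values are hypothetical and the code is not the script's Pine implementation.

```python
import numpy as np

def composite_wave(cycles, n_bars):
    """Sum the selected cycles into one waveform over the last n_bars.
    Each cycle dict carries the period (bars), amplitude, and phase produced
    by the Goertzel step. Hypothetical structure, for illustration only."""
    t = np.arange(n_bars)
    wave = np.zeros(n_bars)
    for c in cycles:
        wave += c["amp"] * np.cos(2.0 * np.pi * t / c["period"] + c["phase"])
    return wave

# Hypothetical example: two detected cycles combined into one composite wave.
cycles = [{"period": 34, "amp": 1.8, "phase": 0.4},
          {"period": 21, "amp": 1.1, "phase": -1.2}]
wave = composite_wave(cycles, n_bars=128)
```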
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator is interpreted by analyzing the plotted composite wave, which represents the combined cyclical components extracted from the price data.
The composite wave is drawn as a solid line, with green indicating a rising (bullish) phase and red indicating a falling (bearish) phase.
Interpreting the indicator therefore involves identifying the direction of the composite wave and matching it with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the cycles detected in a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by dividing the time series into consecutive segments, each one candidate cycle length long, and estimating the amplitude and phase of the cycle within each segment. The test then measures how consistently these segment estimates agree: the more stable the amplitude and phase are from segment to segment, the larger the statistic relative to what random fluctuations would produce.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott, whose work first circulated in the early 1980s and was formally published in 1997. It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is chosen according to the data frequency: a common convention is 1,600 for quarterly data, with smaller values (around 100) for annual data and larger values (such as 14,400) for monthly data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
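Written out, with y_t the observed series, τ_t the trend, and λ the smoothing parameter, the HP filter solves:

```latex
\min_{\{\tau_t\}} \; \sum_{t=1}^{T} (y_t - \tau_t)^2
\;+\; \lambda \sum_{t=2}^{T-1} \bigl[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\bigr]^2
```

The cyclical component is then simply c_t = y_t - τ_t.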
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser indicator, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
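As a rough rule of thumb, evaluating K frequency bins with the Goertzel algorithm costs on the order of K·N operations for N samples, versus roughly N·log2(N) for an FFT of the same length, so Goertzel is the cheaper choice whenever only a handful of bins are needed:

```latex
\text{Goertzel: } O(K \cdot N) \qquad \text{FFT: } O(N \log_2 N)
\qquad \Rightarrow \qquad \text{Goertzel is preferable when } K \lesssim \log_2 N
```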
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when UseCycleList is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and integrating these advanced functions into a single indicator creates a powerful and versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated similarly to the past composite waveform, but over the future window using the projected cycle values.
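The reconstruction step can be pictured as a weighted sum of sinusoids. The sketch below rebuilds a composite wave from hypothetical period/amplitude/phase arrays; the array contents, names, and flags are placeholders, not the script's actual buffers.

```pine
//@version=5
// Sketch only: rebuild a composite wave from per-cycle period, amplitude and phase arrays (placeholder values).
indicator("Composite wave sketch")

var periods = array.from(20.0, 40.0)   // hypothetical detected cycle periods
var amps    = array.from(1.0, 0.5)     // hypothetical amplitudes
var phases  = array.from(0.0, 1.0)     // hypothetical phases, in radians

useCosine   = input.bool(true, "Use cosine")
useAddition = input.bool(true, "Add cycles (false = subtract)")

compositeAt(t) =>
    float total = 0.0
    for i = 0 to array.size(periods) - 1
        p   = array.get(periods, i)
        a   = array.get(amps, i)
        ph  = array.get(phases, i)
        arg = 2.0 * math.pi * t / p + ph
        wav = useCosine ? a * math.cos(arg) : a * math.sin(arg)
        total += useAddition ? wav : -wav
    total

plot(compositeAt(bar_index), "Composite wave")
```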
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
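Projecting a waveform beyond the last bar requires drawing objects rather than plots, since plots cannot extend into the future. The sketch below shows one plausible way to draw a dotted, direction-colored projection with line.new(); the waveform itself is an arbitrary placeholder path, not the script's goertzelFuture values.

```pine
//@version=5
// Sketch of drawing a dotted future projection with direction-based colors (placeholder waveform).
indicator("Future projection drawing sketch", overlay = true, max_lines_count = 500)

futureBars = 20
var fvlines = array.new<line>()

if barstate.islast
    // Wipe and redraw the projection on every update of the last bar.
    while array.size(fvlines) > 0
        line.delete(array.pop(fvlines))
    float prev = close
    for i = 1 to futureBars
        // Placeholder path with an arbitrary amplitude; the real script would use its projected composite values here.
        float nextVal = close + 10.0 * math.sin(2.0 * math.pi * i / 20.0)
        col = nextVal > prev ? color.fuchsia : color.yellow
        array.push(fvlines, line.new(bar_index + i - 1, prev, bar_index + i, nextVal, style = line.style_dotted, color = col))
        prev := nextVal
```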
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartel value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
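For readers unfamiliar with Pine tables, here is a minimal sketch of how such a cycle table can be filled; the data arrays and the reduced column set are placeholders rather than the script's actual buffers.

```pine
//@version=5
// Sketch of a cycle information table (placeholder data, reduced column set).
indicator("Cycle table sketch", overlay = true)

var cyclePeriods = array.from(20.0, 40.0, 60.0)
var cycleAmps    = array.from(1.25, 0.75, 0.40)
var table t = table.new(position.top_right, 3, 4, border_width = 1)

if barstate.islast
    table.cell(t, 0, 0, "Rank",      bgcolor = color.new(color.blue, 70))
    table.cell(t, 1, 0, "Period",    bgcolor = color.new(color.blue, 70))
    table.cell(t, 2, 0, "Amplitude", bgcolor = color.new(color.blue, 70))
    for i = 0 to array.size(cyclePeriods) - 1
        table.cell(t, 0, i + 1, str.tostring(i + 1))
        table.cell(t, 1, i + 1, str.tostring(array.get(cyclePeriods, i), "#.######"))
        table.cell(t, 2, i + 1, str.tostring(array.get(cycleAmps, i),    "#.######"))
```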
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance: While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability: The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, Julius Bartels, who developed it in the first half of the 20th century.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott, who circulated it as a working paper in the early 1980s and published it in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
The first term represents the deviation of the data from the trend.
The second term represents the smoothness of the trend.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
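For reference, the same objective can be written out in standard notation, where y_t is the observed series and τ_t is the trend component (a generic textbook formulation, not code from the indicator):

```latex
\min_{\{\tau_t\}} \ \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \left[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \right]^2
```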
The smoothing parameter λ is chosen according to the frequency of the data: a common convention is 1,600 for quarterly data, around 100 for annual data, and considerably larger values for monthly data. Higher values of λ produce a smoother trend, while lower values allow the trend to follow the data more closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Laguerre Multi-Filter [DW]This is an experimental study designed to identify underlying price activity using a series of Laguerre Filters.
Two different modes are included within this script:
-Ribbon Mode - A ribbon of 18 Laguerre Filters with separate Gamma values is calculated.
-Band Mode - An average of the 18 filters generates the basis line. Then, the Golden Mean ATR over the specified sampling period, multiplied by 1 and by 2, is added to and subtracted from the basis line to generate the bands.
Multi-Timeframe functionality is included. You can choose any timeframe that TradingView supports as the basis resolution for the script.
Custom bar colors are included. Bar colors are based on the direction of any of the 18 filters, or the average filter's direction in Ribbon Mode. In Band Mode, the colors are based solely on the average filter's direction.
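For context, a single Laguerre filter stage (Ehlers' well-known formulation) can be sketched as below; the actual study stacks 18 of these with different Gamma values and may differ in details such as initialization and the band calculations.

```pine
//@version=5
// Sketch of one Laguerre filter stage (Ehlers' formulation) — illustration only, not DW's multi-filter code.
indicator("Laguerre filter sketch", overlay = true)

gamma = input.float(0.5, "Gamma", minval = 0.0, maxval = 1.0, step = 0.05)

var float l0 = na
var float l1 = na
var float l2 = na
var float l3 = na

l0 := (1 - gamma) * close + gamma * nz(l0[1], close)
l1 := -gamma * l0 + nz(l0[1], close) + gamma * nz(l1[1], close)
l2 := -gamma * l1 + nz(l1[1], close) + gamma * nz(l2[1], close)
l3 := -gamma * l2 + nz(l2[1], close) + gamma * nz(l3[1], close)

filt = (l0 + 2 * l1 + 2 * l2 + l3) / 6
plot(filt, "Laguerre filter", color = filt > filt[1] ? color.teal : color.maroon, linewidth = 2)
```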
Buy&Sell Hollow CandlesThe Hollow Candles Script is a type of candlestick analysis script designed to highlight the following:
Purpose of the Script: This script provides the user with buy and sell signals based on candlesticks that show an upward or downward reversal.
Mechanism of the Script: When a hollow (unfilled) red candle appears, it signals a potential entry, provided that this candle is at a low point, following a series of red candles with higher volume than previous days. Similarly, it gives a sell signal when a green candle appears at a peak with high sell volume surpassing that of prior days. However, the appearance of these candles alone should not prompt an immediate buy or sell; you should wait for a confirming candle to validate the signal.
Sideways Movement Caution: If these signals appear during a sideways or flat trend, it is not advisable to proceed with buying or selling.
Chart Insights: The chart demonstrates certain buy and sell operations along with some non-ideal signals where decision-making should be based on fundamental analytical experience.
Jason's Simple Moving Averages WaveUnderstanding the Script:
Purpose: This script identifies potential trend direction and momentum using a moving average and wave amplitude calculation. It shows a green line when the price is trending upwards and a red line when trending downwards.
Strategy: This script doesn't provide a complete trading strategy. It's an indicator designed to be used alongside other tools.
Parameters: You can adjust the "Moving Average Length" input to change the sensitivity of the indicator. A shorter length will react quicker to price changes, while a longer length will be smoother but less responsive.
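As a rough illustration of the idea (not the published code), a minimal direction-colored moving average in Pine Script™ could look like this; the wave-amplitude part of the original indicator is omitted here:

```pine
//@version=5
// Minimal sketch: a moving average drawn green when rising and red when falling.
indicator("MA wave sketch", overlay = true)

len = input.int(20, "Moving Average Length", minval = 1)
ma  = ta.sma(close, len)

plot(ma, "MA wave", color = ma > ma[1] ? color.green : color.red, linewidth = 2)
```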
How to Use it:
Load the Script: In TradingView, navigate to the indicator creation section and paste the provided script code.
Adjust Parameters: Set the "Moving Average Length" based on your preferred timeframe and trading style.
Combine with Other Tools: Use the indicator along with other technical indicators or price action analysis to confirm potential entry and exit points for trades.
Here are some additional points to consider:
Crossovers: You could look for buy signals when the price crosses above the green line and sell signals when it crosses below the red line. However, these can be prone to false signals.
Divergence: Look for divergences between the price movement and the wave indicator. For example, a rising price with a falling wave could indicate overbought conditions and a potential reversal.
Confirmation: Don't rely solely on this indicator. Use it alongside other confirmations from price action, volume analysis, or other indicators to identify higher probability trades.
Smart Money Analysis with Golden/Death Cross [YourTradingSensei]Description of the script "Smart Money Analysis with Golden/Death Cross":
This TradingView script is designed for market analysis based on the concept of "Smart Money" and includes the detection of Golden Cross and Death Cross signals.
Key features of the script:
Moving Averages (SMA):
Two moving averages are calculated: a short-term (50 periods) and a long-term (200 periods).
The intersections of these moving averages are used to determine Golden Cross and Death Cross signals.
High Volume:
The current trading volume is analyzed.
Periods of high volume are identified when the current volume exceeds the average volume by a specified multiplier.
Support and Resistance Levels:
Key support and resistance levels are determined based on the highest and lowest prices over a specified period.
Buy and Sell Signals:
Buy and sell signals are generated based on moving average crossovers, high volume, and the closing price relative to key levels.
Golden Cross and Death Cross:
A Golden Cross occurs when the short-term moving average crosses above the long-term moving average.
A Death Cross occurs when the short-term moving average crosses below the long-term moving average.
These signals are displayed on the chart with text color changes for better visualization.
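The 50/200 SMA crossover logic can be sketched in a few lines of Pine Script™; this is a generic illustration of the concept, with plotting choices that are not necessarily those of the published script:

```pine
//@version=5
// Sketch of Golden Cross / Death Cross detection with 50- and 200-period SMAs.
indicator("Golden/Death Cross sketch", overlay = true)

smaFast = ta.sma(close, 50)
smaSlow = ta.sma(close, 200)

goldenCross = ta.crossover(smaFast, smaSlow)
deathCross  = ta.crossunder(smaFast, smaSlow)

plot(smaFast, "SMA 50",  color = color.orange)
plot(smaSlow, "SMA 200", color = color.blue)
plotshape(goldenCross, "Golden Cross", shape.triangleup,   location.belowbar, color.green, size = size.small)
plotshape(deathCross,  "Death Cross",  shape.triangledown, location.abovebar, color.red,   size = size.small)
```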
Using the script:
The script helps traders visualize key signals and levels, aiding in making informed trading decisions based on the behavior of major market players and technical analysis.
Custom candle lighting (CCL) © 2024 by YourTradingSensei is licensed under CC BY-NC-SA 4.0.
Monthly Performance Table by Dr. MauryaWhat is this ?
This strategy script does not aim to produce strategy results; it aims to produce a monthly PnL performance calendar table, which is useful for the TradingView community to generate a monthly performance table for their own strategies.
So make sure to read the disclaimer below.
Why publish it?:
I am not satisfied with the monthly performance tables available in TV community scripts. Sometimes the code is very lengthy, and sometimes it shows the wrong PnL for the current month.
So I decided to develop a new monthly performance (return) table, in value as well as in percentage, that is highly flexible and adjusts its rows automatically.
Features :
Accuracy increased for current month PnL.
There are 14 columns, and the rows adjust automatically according to the available trade years/months.
The first column shows the YEAR, columns two through thirteen show the months, and the fourteenth column shows the yearly PnL.
The tabulated data shows the monthly PnL (value and %) in the month columns and the yearly PnL (value and %) in the yearly column.
Various color inputs are also included to change the table's look: background color, text color, heading text color, and border color.
In the tabulated data, the background color turns green for profit and red for loss.
Copy from line 54 to the last line as-is into your own strategy script.
Credit: This code is a modified and extended version of the open-source code originally written by QuantNomad. Thanks for their contribution, which gives a base and a lead to other developers. I have changed the way past PnL is determined to an array form and kept the current month and year PnL separate from the array, which avoids false PnL for the current month.
Strategy description:
As I said in the first line, this strategy aims to provide a monthly performance table and is not focused on the strategy itself. But it is necessary to explain the strategy I have used here. The strategy is simply based on the ADX available in TV community scripts. A long entry occurs when the difference between DIPlus and ADX reaches a certain value (set the value in "Long difference" in the Input tab), while a short entry occurs when the difference between DIMinus and ADX reaches a certain value (set the value in "Short difference" in the Input tab).
Default strategy properties used on the chart (Important)
This script's backtest is done on the 1-hour timeframe of the NSE:Reliance Inds Future chart, using the following backtesting properties:
Balance (default): 500 000 (default base currency)
Order Size: 1 contract
Commission: 20 INR per order
Slippage: 5 ticks
Default setting in Input tab
Len (ADX length) : 14
Th (ADX Threshold): 20
Long Difference (DIPlus - ADX) = 5
Short Difference (DIMinus - ADX) = 5
We use these properties to ensure a realistic preview of the backtesting system; do note that default properties can be different for various reasons, as described below:
Order Size: 1 contract by default, this is to allow the strategy to run properly on most instruments such as futures.
Commission: Commission can vary depending on the market and instrument; there is no default value that would return realistic results.
We strongly recommend all users to ensure they adjust the Properties within the script settings to be in line with their accounts & trading platforms of choice to ensure results from the strategies built are realistic.
Disclaimer:
This script is not indicative of any future results.
This script does not provide any financial advice.
This strategy only serves as ready-made snippet code for a monthly PnL performance calendar table for your own strategy.
Liquidations Meter [LuxAlgo]The Liquidation Meter aims to gauge the momentum of the bar, identify the strength of the bulls and bears, and more importantly identify probable exhaustion/reversals by measuring probable liquidations.
🔶 USAGE
This tool includes many features related to the concept of liquidation. The two core ones are the liquidation meter and liquidation price calculator, highlighted below.
🔹 Liquidation Meter
The liquidation meter presents liquidations on the price chart by measuring the highest leverage value of longs and shorts that have been potentially liquidated on the last chart bar, hence allowing traders to:
gauge the momentum of the bar.
identify the strength of the bulls and bears.
identify probable reversal/exhaustion points.
Liquidation of low-leveraged positions can be indicative of exhaustion.
🔹 Liquidation Price Calculator
A liquidation price calculator might come in handy when you need to calculate at what price level your leveraged position in Crypto, Forex, Stocks, or any other asset class gets liquidated to add a protective stop to mitigate risk. Monitoring an open position gets easier if the trader can calculate the total risk in order for them to choose the right amount of margin and leverage.
The liquidation price is the price level at which the trader's leveraged position gets liquidated due to a loss. As the leverage is increased, the distance from the trader's entry price to the liquidation price shrinks.
While you have one or several trades open you can quickly check their liquidation levels and determine which one of the trades is closest to their liquidation price.
If you are a day trader that uses leverage and you want to know which trade has the best outlook you can calculate the liquidation price to see which one of the trades looks best.
🔹 Dashboard
The bar statistics option enables measuring and presenting trading activity, volatility, and probable liquidations for the last chart bar.
🔶 DETAILS
It's important to note that the liquidation price calculator tool uses a formula to calculate the liquidation price based only on the entry price and leverage ratio.
Other factors such as leveraged fees, position size, and other interest payments have been excluded since they are variables that don’t directly affect the level of liquidation of a leveraged position.
The calculator also assumes that traders are using an isolated margin for one single position and does not take into consideration the additional margin they might have in their account.
🔹Liquidation price formula
the liquidation distance in percentage = 100 / leverage ratio
the liquidation distance in price = current asset price x the liquidation distance in percentage
the liquidation price (longs) = current asset price – the liquidation distance in price
the liquidation price (shorts) = current asset price + the liquidation distance in price
or simply
the liquidation price (longs) = entry price * (1 – 1 / leverage ratio)
the liquidation price (shorts) = entry price * (1 + 1 / leverage ratio)
Example:
Let’s say that you are trading a leverage ratio of 1:20. The first step is to calculate the distance to your liquidation point in percentage.
the liquidation distance in percentage = 100 / 20 = 5%
Now you know that your liquidation price is 5% away from your entry price. Let's calculate 5% below and above the entry price of the asset you are currently trading. As an example, we assume that you are trading bitcoin which is currently priced at $35000.
the liquidation distance in price = $35000 x 0.05 = $1750
Finally, calculate liquidation prices.
the liquidation price (longs) = $35000 – $1750 = $33250
the liquidation price (short) = $35000 + $1750 = $36750
In this example, short liquidation price is $36750 and long liquidation price is $33250.
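The worked example above translates directly into a few lines of Pine Script™. The sketch below applies the simplified formulas (isolated margin, fees and funding excluded); the input names are illustrative, not the indicator's actual settings.

```pine
//@version=5
// Sketch of the simplified liquidation price formulas (isolated margin, fees excluded).
indicator("Liquidation price sketch", overlay = true)

leverage   = input.float(20.0, "Leverage ratio", minval = 1.0)
entryInput = input.float(0.0,  "Entry price (0 = use current close)", minval = 0.0)
entryPrice = entryInput == 0.0 ? close : entryInput

longLiq  = entryPrice * (1.0 - 1.0 / leverage)   // e.g. 35000 * (1 - 1/20) = 33250
shortLiq = entryPrice * (1.0 + 1.0 / leverage)   // e.g. 35000 * (1 + 1/20) = 36750

plot(longLiq,  "Long liquidation",  color = color.red)
plot(shortLiq, "Short liquidation", color = color.green)
```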
🔹How leverage ratio affects the liquidation price
The entry price is the starting point of the calculation, and it is from here that the liquidation price is calculated. The leverage ratio has a direct impact on the liquidation price, since the more you borrow, the less “wiggle room” your trade has.
An increase in leverage will subsequently reduce the distance to full liquidation. On the contrary, choosing a lower leverage ratio will give the position more room to move.
🔶 SETTINGS
🔹Liquidations Meter
Base Price: The option where to set the reference/base price.
🔹Liquidation Price Calculator
Liquidation Price Calculator: Toggles the visibility of the calculator. Details and assumptions made during the calculations are stated in the tooltip of the option.
Entry Price: The option where to set the entry price, a value of 0 will use the current closing price. Details are given in the tooltip of the option.
Leverage: The option where to set the leverage value.
Show Calculated Liquidation Prices on the Chart: Toggles the visibility of the liquidation prices on the price chart.
🔹Dashboard
Show Bar Statistics: Toggles the visibility of the last bar statistics.
🔹Others
Liquidations Meter Text Size: Liquidations Meter text size.
Liquidations Meter Offset: Liquidations Meter offset.
Dashboard/Calculator Placement: Dashboard/calculator position on the chart.
Dashboard/Calculator Text Size: Dashboard text size.
🔶 RELATED SCRIPTS
Here are some of the scripts that are related to the liquidation and liquidity concept, for more and other conceptual scripts you are kindly invited to visit LuxAlgo-Scripts .
Liquidation-Levels
Liquidations-Real-Time
Buyside-Sellside-Liquidity
[blackcat] L1 T3 MA Lite Version
Tilson T3 Moving Average (T3MA) is a type of moving average line designed to reduce lag and improve the accuracy of trend identification. It is based on a combination of multiple smoothed moving averages, with each subsequent smoothed moving average having a higher weight than the previous one. The T3MA formula includes three different smoothing coefficients and a volume coefficient or volatility coefficient, which can be adjusted according to user preferences. T3MA is commonly used by traders and investors to identify trends and generate trading signals.
The calculation method for T3MA requires the use of exponential moving averages (EMA). Over 90% of the Pine scripts in the TradingView community use the EMA function to calculate it. Specifically, in Pine scripts, it is necessary to define the length and volatility coefficient of T3MA, then calculate three different lengths of EMA separately. Next, three constants need to be calculated that are related to volatility. Finally, the three EMAs are combined with the three constants in a weighted sum to obtain the value of T3MA. If you want to customize the length and volatility of T3MA, you just need to modify the parameters in the code. Overall, T3MA is a very useful technical indicator that can help traders better understand market trends and improve trading efficiency.
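For reference, the widely used textbook Tilson T3 is built from six cascaded EMAs of the same length combined with volume-factor weights; community implementations, including the Lite version discussed here, may organize the EMAs and constants differently. A generic sketch:

```pine
//@version=5
// Classic (textbook) Tilson T3 formulation — for reference only; not the Lite version's exact code.
indicator("Tilson T3 sketch", overlay = true)

len = input.int(14, "Length", minval = 1)
a   = input.float(0.7, "Volume factor", minval = 0.0, maxval = 1.0, step = 0.05)

e1 = ta.ema(close, len)
e2 = ta.ema(e1, len)
e3 = ta.ema(e2, len)
e4 = ta.ema(e3, len)
e5 = ta.ema(e4, len)
e6 = ta.ema(e5, len)

c1 = -math.pow(a, 3)
c2 = 3 * math.pow(a, 2) + 3 * math.pow(a, 3)
c3 = -6 * math.pow(a, 2) - 3 * a - 3 * math.pow(a, 3)
c4 = 1 + 3 * a + math.pow(a, 3) + 3 * math.pow(a, 2)

t3 = c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3
plot(t3, "T3", linewidth = 2)
```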
The improved version introduced today mainly addresses my perception that traditional T3 algorithms are too redundant with high computational complexity leading to delayed reactions. Therefore, I have developed a lightweight version called L1 T3 MA Lite Version. This doesn't bring about any qualitative changes; it simply makes adjustments in terms of computational resources and response speed. To illustrate its advantages compared with traditional T3 MA indicators, I will provide a comparison using Everget's script from TradingView community blogger everget.
The difference between these two scripts for calculating T3 Moving Average lies in their implementation methods. The first script (Everget) uses a more complex calculation formula, which requires calculating three different lengths of EMA and computing three constants based on volatility. Finally, they are weighted averaged to obtain T3MA. This complex calculation formula can enhance the sensitivity of the T3MA indicator, thereby better identifying price trends. On the other hand, the second script (Blackcat1402) uses a relatively simple calculation formula that only requires calculating three different lengths of EMA and computing three constants based on volatility. Finally, they are weighted averaged to obtain T3MA as well. This simple calculation formula reduces computational complexity and speeds up calculations. Both have slightly different effects and calculation methods; users can choose the script that suits their needs.
In summary, T3 Moving Average is a very useful technical indicator that can help traders better understand market trends and improve trading efficiency. Users can choose scripts suitable for themselves according to their needs and flexibly adjust the length and volatility coefficient of T3MA to adapt to different markets.
BUY/SELL + ADVANCE DECLINEThis script is a custom trading view indicator that helps to identify potential buy and sell signals based on the RSI (Relative Strength Index) and SMA (Simple Moving Average) indicators. The script also identifies potential reversals using a combination of RSI and price action. It plots buy, sell, and reversal signals on the chart along with an SMA line. Additionally, it provides alerts based on the buy, sell, and reversal conditions.
Changes made to the original script:
Fixed the undeclared identifier 'c' error by calculating the difference between the current closing price and the previous closing price: c = close - close[1].
Added an "ADD Value Floating Label" to the chart. The label shows the difference between the current and previous closing prices (ADD value) along with a "Bullish" or "Bearish" indicator based on the value of 'c'. The label is positioned at the top right of the visible chart area and remains static.
Here's a summary of the major components of the script:
Input settings: Define the input parameters for RSI and SMA.
Calculation of RSI and SMA: Compute the RSI and SMA values based on the input parameters.
Color definitions: Define colors for different conditions and levels.
Condition definitions: Define various conditions for buy, sell, reversal, and other criteria.
Buy and sell conditions: Determine buy and sell signals based on RSI, SMA, and price action.
Reversal conditions: Identify potential reversals using RSI and price action.
Plot signals: Display buy, sell, and reversal signals on the chart.
Bar colors: Color the bars based on the identified signals.
Plot SMA: Display the SMA line on the chart.
Alert conditions: Set up alerts for buy, sell, and reversal conditions.
ADD Value Floating Label: Add a label to the chart showing the ADD value and a "Bullish" or "Bearish" indicator.
Crossover Alerts for Yesterday O/H/L/C , Today Vwap [Zero54]This is a very simple script/indicator that triggers alerts every time the instrument meets any of the following conditions:
1) Price crosses yesterday's (previous day's) high
2) Price crosses yesterday's (previous day's) low
3) Price crosses yesterday's (previous day's) open
4) Price crosses yesterday's (previous day's) close
5) Price crosses today's VWAP.
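On an intraday chart, the five conditions can be sketched along the following lines; this is a generic reconstruction of the idea, not the published source, and the request.security() call with a one-bar offset plus lookahead is just one common way to fetch the previous day's values without repainting.

```pine
//@version=5
// Sketch: cross alerts against previous-day OHLC and today's session VWAP (intended for intraday charts).
indicator("Prev-day OHLC / VWAP cross sketch", overlay = true)

[pOpen, pHigh, pLow, pClose] = request.security(syminfo.tickerid, "D", [open[1], high[1], low[1], close[1]], lookahead = barmerge.lookahead_on)
vwapToday = ta.vwap(hlc3)   // session-anchored VWAP

crossAny = ta.cross(close, pHigh) or ta.cross(close, pLow) or ta.cross(close, pOpen) or ta.cross(close, pClose) or ta.cross(close, vwapToday)

plot(pOpen,  "Previous open",  color = color.gray)
plot(pHigh,  "Previous high",  color = color.red)
plot(pLow,   "Previous low",   color = color.green)
plot(pClose, "Previous close", color = color.orange)
plot(vwapToday, "Today's VWAP", color = color.purple)

alertcondition(crossAny, "OHLC/VWAP cross", "Price crossed a previous-day level or today's VWAP")
```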
I developed this to keep track of the scripts I follow and I find it useful. Hope you will find it useful too.
Steps to use:
1) Open the ticker for which you want to set the alerts.
2) Add this indicator to the chart.
3) Right-click on the text and choose "Add Alert".
4) After you are done setting up the alert, feel free to remove the indicator from the chart. It is not necessary for the indicator to be added to the chart in order for the alerts to work.
5) Repeat 1-4 for all the scripts for which you want to set the alerts.
Be advised: During market open, if you have set alerts for multiple scripts, a tsunami of alerts may be triggered.
If you like this alert indicator, please like/boost it. Feel free to re-use this code however you may wish to. Cheers!
Chart VWAP█ OVERVIEW
This indicator displays a Volume-Weighted Average Price anchored to the leftmost visible bar of the chart. It dynamically recalculates when the chart's visible bars change because you scroll or zoom your chart.
If you are not already familiar with VWAP, our Help Center will get you started. The typical VWAP is designed to be used on intraday charts, as it resets at the beginning of the day. Our Rolling VWAP , instead, resets on a rolling time window. You may also find the VWAP Auto Anchored built-in indicator worth a try.
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how). By default, it displays the chart's VWAP in orange and a simple average of the chart's visible close values in gray. This average can be used as a companion to the VWAP, since both are calculated from the same set of bars. The script's settings allow you to hide it.
You may also use the script's settings to enable the display of the chart's OHLC (open, high, low, close) levels and the values of the high and low. These are also calculated from the range of visible bars. You can complement the high and low lines with their price and their distance in percent from the chart's latest visible close . You can use the levels to quickly identify the distances from extreme points in the visible price range, as well as observe the visible chart's beginning and end prices.
█ NOTES FOR Pine Script™ CODERS
This script showcases three novelties:
• Dynamic recalculation on visible bars
• The VisibleChart library by PineCoders
• The new `anchor` parameter of ta.vwap()
Dynamic recalculation on visible bars
This script behaves in a novel way made possible by the recent introduction of two new built-in variables: chart.left_visible_bar_time and chart.right_visible_bar_time , which return the opening time of the leftmost and rightmost visible bars on the chart. These are only two of many new built-ins in the `chart.*` namespace. See this blog post for more information, or look up them up by typing "chart." in the Pine Script™ Reference Manual .
Any script using chart.left_visible_bar_time or chart.right_visible_bar_time acquires a unique property, which triggers its recalculation when traders scroll or zoom their chart, causing the range of visible bars to change. This new capability is what makes it possible for this script to calculate its VWAP on the chart's visible bars only, and dynamically recalculate if the user scrolls or zooms their chart.
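A minimal illustration of this mechanism: because the whole script re-executes over the dataset whenever the visible range changes, a simple accumulator filtered by the visible-bar times produces a statistic for the visible bars only. The variable names here are illustrative; this is not the Chart VWAP source.

```pine
//@version=5
// Sketch: average close of only the bars currently visible on the chart.
indicator("Visible-bars average sketch", overlay = true)

isVisible = time >= chart.left_visible_bar_time and time <= chart.right_visible_bar_time

var float sumClose = 0.0
var int   count    = 0

if isVisible
    sumClose += close
    count    += 1

visibleAvg = count > 0 ? sumClose / count : na
plot(visibleAvg, "Average of visible closes", color = color.gray)
```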
This script is just a start to the party; endless uses for indicators that redraw on changes to the chart will no doubt emerge through the hands of our community's Pine Script™ programmers.
The VisibleChart library by PineCoders
The newly published VisibleChart library is designed to help programmers benefit from the new capabilities made possible by the fact that Pine Script™ code can now tell when it is executing on visible bars. The library's description, functions and example code will help programmers make the most of the new feature.
This script uses three of the library's functions:
• `PCvc.vVwap()` calculates a VWAP for visible bars.
• `PCvc.avg()` calculates the average of a source value for visible bars only. We use it to calculate the average close (the default source).
• `PCvc.chartXTimePct(25)` calculates a time value corresponding to 25% of the horizontal distance between visible bars, starting from the left.
The new `anchor` parameter of ta.vwap()
Our script also uses this new `anchor` parameter to reset the VWAP at the leftmost visible bar. See how simple the code is for the VisibleChart library's `vVwap()` function.
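In other words, the anchoring can be reduced to a boolean condition that is true only on the leftmost visible bar, roughly like this (a sketch, not the library's code):

```pine
//@version=5
// Sketch: VWAP reset on the leftmost visible bar via the `anchor` parameter of ta.vwap().
indicator("Visible-range VWAP sketch", overlay = true)

anchorCondition = time == chart.left_visible_bar_time
plot(ta.vwap(hlc3, anchorCondition), "Visible-range VWAP", color = color.orange)
```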
Look first. Then leap.
Volume X-ray [LucF]█ OVERVIEW
This tool analyzes the relative size of volume reported on intraday vs EOD (end of day) data feeds on historical bars. If you use volume data to make trading decisions, it can help you improve your understanding of its nature and quality, which is especially important if you trade on intraday timeframes.
I often mention, when discussing volume analysis, how it's important for traders to understand the volume data they are using: where it originates, what it includes and does not include. By helping you spot sizeable differences between volume reported on intraday and EOD data feeds for any given instrument, "Volume X-ray" can point you to instruments where you might want to research the causes of the difference.
█ CONCEPTS
The information used to build a chart's historical bars originates from data providers (exchanges, brokers, etc.) who often maintain distinct historical feeds for intraday and EOD timeframes. How volume data is assembled for intraday and EOD feeds varies with instruments, brokers and exchanges. Variations between the two feeds — or their absence — can be due to how instruments are traded in a particular sector and/or the volume reporting policy for the feeds you are using. Instruments from crypto and forex markets, for example, will often display similar volume on both feeds. Stocks will often display variations because block trades or other types of trades may not be included in their intraday volume data. Futures will also typically display variations. It is even possible that volume from different feeds may not be of the same nature, as you can get trade volume (market volume) on one feed and tick volume (transaction counts) on another. You will sometimes be able to find the details of what different feeds contain from the technical information provided by exchanges/brokers on their feeds. This is an example for the NASDAQ feeds . Once you determine which feeds you are using, you can look for the reporting specs for that feed. This is all research you will need to do on your own; "Volume X-ray" will not help you with that part.
You may elect to forego the deep dive in feed information and simply rely on the figure the indicator will calculate for the instruments you trade. One simple — and unproven — way to interpret "Volume X-ray" values is to infer that instruments with larger percentages of intraday/EOD volume ratios are more "democratic" because at intraday timeframes, you are seeing a greater proportion of the actual traded volume for the instrument. This could conceivably lead one to conclude that such volume data is more reliable than on an instrument where intraday volume accounts for only 3% of EOD volume, let's say.
Note that as intraday vs EOD variations exist for historical bars on some instruments, there will typically also be differences between the realtime feeds used on intraday vs 1D or greater timeframes for those same assets. Realtime reporting rules will often be different from historical feed reporting rules, so variations between realtime feeds will often be different from the variations between historical feeds for the same instrument. A deep dive in reporting rules will quickly reveal what a jungle they are for some instruments, yet it is the only way to really understand the volume information our charts display.
█ HOW TO USE IT
The script is very simple and has no inputs. Just add it to 1D charts and it will calculate the proportion of volume reported on the intraday feed over the EOD volume. The plots show the daily values for both volumes: the teal area is the EOD volume, the orange line is the intraday volume. A value representing the average, cumulative intraday/EOD volume percentage for the chart is displayed in the upper-right corner. Its background color changes with the percentage, with brightness levels proportional to the percentage for both the bull color (% >= 50) or the bear color (% < 50). When abnormal conditions are detected, such as missing volume of one kind or the other, a yellow background is used.
Daily and cumulative values are displayed in indicator values and the Data Window.
The indicator loads in a pane, but you can also use it in overlay mode by moving it on the chart with "Move to" in the script's "More" menu, and disabling the plot display from the "Settings/Style" tab.
█ LIMITATIONS
• The script will not run on timeframes >1D because it cannot produce useful values on them.
• The calculation of the cumulative average will vary on different intraday timeframes because of the varying number of days covered by the dataset.
Variations can also occur because of irregularities in reported volume data. That is the reason I recommend using it on 1D charts.
• The script only calculates on historical bars because in real time there is no distinction between intraday and EOD feeds.
• You will see plenty of special cases if you use the indicator on a variety of instruments:
• Some instruments have no intraday volume, while on others it's the opposite.
• Missing information will sometimes appear here and there on datasets.
• Some instruments have higher intraday than EOD volume.
Please do not ask me the reasons for these anomalies; it's your responsibility to find them. I supply a tool that will spot the anomalies for you — nothing more.
█ FOR PINE CODERS
• This script uses a little-known feature of request.security() , which allows us to specify `"1440"` for the `timeframe` argument.
When you do, data from the 1min intrabars of the historical intraday feed is aggregated over one day, as opposed to the usual EOD feed used with `"D"` (see the short sketch at the end of this section).
• I use gaps on my request.security() calls. This is useful because at intraday timeframes I can cumulate non- na values only.
• I use fixnan() on some values. For those who don't know about it yet, it eliminates na values from a series, just like not using gaps will do in a request.security() call.
• I like how the new switch structure makes for more readable code than equivalent if structures.
• I wrote my script using the revised recommendations in the Style Guide from the Pine v5 User Manual.
• I use the new runtime.error() to throw an error when the script user tries to use a timeframe >1D.
Why? Because then, my request.security() calls would be returning values from the last 1D intrabar of the dilation of the, let's say, 1W chart bar.
This of course would be of no use whatsoever — and misleading. I encourage all Pine coders fetching HTF data to protect their script users in the same way.
As tool builders, it is our responsibility to shield unsuspecting users of our scripts from contexts where our calcs produce invalid results.
• While we're on the subject of accessing intrabar timeframes, I will add this for the attention of coders falling victim to what appears to be
a new misconception where the mere fact of using intrabar timeframes with request.security() is believed to provide some sort of edge.
This is a fallacy unless you are sending down functions specifically designed to mine values from request.security() 's intrabar context.
These coders do not seem to realize that:
• They are only retrieving information from the last intrabar of the chart bar.
• The already flawed behavior of their scripts on historical bars will not improve on realtime bars. It will actually worsen because in real time,
intrabars are not yet ordered sequentially as they are on historical bars.
• Alerts or strategy orders using intrabar information acquired through request.security() will be using flawed logic and data most of the time.
The situation reminds me of the mania where using Heikin-Ashi charts to backtest was all the rage because it produced magnificent — and flawed — results.
Trading is difficult enough when doing the right things; I hate to see traders infected by lethal beliefs.
Strive to sharpen your "herd immunity", as Lionel Shriver calls it. She also writes: "Be leery of orthodoxy. Hold back from shared cultural enthusiasms."
Be your own trader.
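To close this section, here is a short sketch of the intraday-vs-EOD volume comparison mentioned in the first two points above; the plotting choices are arbitrary, and the real indicator adds validation, cumulative averaging, and the dashboard logic.

```pine
//@version=5
// Sketch: compare volume aggregated from the intraday feed ("1440") with the EOD feed ("D") on a 1D chart.
indicator("Intraday vs EOD volume sketch")

intradayVol = request.security(syminfo.tickerid, "1440", volume, gaps = barmerge.gaps_on)
eodVol      = request.security(syminfo.tickerid, "D",    volume, gaps = barmerge.gaps_on)

plot(eodVol,      "EOD volume",      color = color.new(color.teal, 60), style = plot.style_area)
plot(intradayVol, "Intraday volume", color = color.orange)
```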
█ THANKS
This indicator would not exist without the invaluable insights from Tim, a member of the Pine team. Thanks Tim!